The Dark Web is a hotbed for illicit activity, where users communicate on different market forums in order to exchange goods and services. Law enforcement agencies benefit from forensic tools that perform authorship analysis, in order to identify and profile users based on their textual content. However, authorship analysis has traditionally been studied using literary texts, such as fragments from novels or fan fiction, which may be unsuitable in a cybercrime context. Moreover, the few works that do employ authorship analysis tools for cybercrime usually rely on ad-hoc experimental setups and datasets. To address these issues, we release VeriDark: a benchmark comprised of three large-scale authorship verification datasets and one authorship identification dataset, obtained from user activity on either dark-web-related Reddit communities or popular illicit dark-web market forums. We evaluate competitive NLP baselines on the three datasets and perform an analysis of the predictions to better understand the limitations of such approaches. We make the datasets and baselines publicly available at https://github.com/bit-ml/veridark
translated by Google Translate
Analyzing the distribution shift of data is a growing research direction in today's machine learning, leading to new benchmarks that focus on providing suitable scenarios for studying the generalization properties of ML models. Existing benchmarks concentrate on supervised learning and, to the best of our knowledge, none address unsupervised learning. We therefore introduce an unsupervised anomaly detection benchmark with data that shifts over time, built upon Kyoto-2006+, a traffic dataset for network intrusion detection. This kind of data meets the premise of shifting input distributions: it covers a large time span ($\sim$10 years), with naturally occurring changes over time (e.g. users modifying their behavior patterns, and software updates). We first highlight the non-stationary nature of the data using basic per-feature analysis, t-SNE, and an optimal transport approach for measuring the overall distribution distances between years. Next, we propose AnoShift, a protocol that splits the data into IID, NEAR, and FAR testing splits. We validate the performance degradation over time with diverse models, from classical approaches such as Isolation Forest to more recent ones. Finally, we show that by acknowledging the distribution shift problem and properly addressing it, the performance can be improved compared to the classical IID training (by up to 3% on average). The dataset and code are available at https://github.com/bit-ml/anoshift/.
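As a loose illustration of the protocol described above, the sketch below builds chronological IID / NEAR / FAR splits from synthetic per-year data and scores each split with a toy distance-based detector. The drift model, year boundaries, and detector are illustrative assumptions, not the actual Kyoto-2006+ features or the official AnoShift splits.

```python
import numpy as np

# Illustrative per-year data with a slowly drifting mean (a stand-in
# for network-traffic features whose distribution changes over time).
rng = np.random.default_rng(0)
years = np.arange(2006, 2016)
data = {y: rng.normal(loc=0.1 * (y - 2006), size=(100, 8)) for y in years}

train_years = range(2006, 2011)
splits = {
    "IID": range(2006, 2011),    # same period as training
    "NEAR": range(2011, 2014),   # moderate shift
    "FAR": range(2014, 2016),    # strongest shift
}

train = np.concatenate([data[y] for y in train_years])
mu, sigma = train.mean(axis=0), train.std(axis=0)

def anomaly_score(x):
    # Toy detector: mean standardized distance from the training data.
    return float(np.linalg.norm((x - mu) / sigma, axis=1).mean())

scores = {name: anomaly_score(np.concatenate([data[y] for y in yrs]))
          for name, yrs in splits.items()}
```

Under drifting inputs the score grows from IID to NEAR to FAR, mirroring the degradation over time that the protocol is designed to expose.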
The task of identifying the author of a text spans several decades and has been tackled using linguistics, statistics and, more recently, machine learning. Inspired by the impressive performance gains across a broad range of natural language processing tasks, and by the availability of the large-scale PAN authorship dataset, we first study the effectiveness of several BERT-like transformers for the task of authorship verification. Such models prove to consistently achieve very high scores. Next, we empirically show that they focus on topical clues rather than on author writing style characteristics, exploiting existing biases in the dataset. To address this problem, we provide new splits for PAN-2020, in which training and test data are sampled from disjoint topics or authors. Finally, we introduce DarkReddit, a dataset with a different input data distribution. We further use it to analyze the domain generalization performance of models in a low-data regime, and how performance changes when using the proposed PAN-2020 splits for fine-tuning. We show that those splits can enhance the models' ability to transfer knowledge to a new, significantly different dataset.
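The disjoint-author idea behind the proposed splits can be sketched as follows; the authors, documents, and pairing scheme below are synthetic placeholders, not the actual PAN-2020 data or the paper's sampling procedure.

```python
import itertools
import random

# Synthetic author/document pool (placeholder names and texts).
random.seed(0)
docs = {f"author_{i}": [f"text_{i}_{j}" for j in range(4)] for i in range(10)}

authors = sorted(docs)
random.shuffle(authors)
train_authors, test_authors = set(authors[:7]), set(authors[7:])

def verification_pairs(author_pool):
    """Build (doc_a, doc_b, same_author) verification pairs."""
    pool = sorted(author_pool)
    pairs = []
    for a in pool:
        # positive pair: two documents by the same author
        pairs.append((docs[a][0], docs[a][1], True))
    for a, b in itertools.combinations(pool, 2):
        # negative pair: documents by two different authors
        pairs.append((docs[a][0], docs[b][0], False))
    return pairs

# Because the author sets are disjoint, no author seen at training time
# can leak author-specific topical cues into the test pairs.
train_pairs = verification_pairs(train_authors)
test_pairs = verification_pairs(test_authors)
```

A model that scores well on such a split must rely on writing style rather than on memorized author-topic associations.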
The purpose of this work was to tackle practical issues which arise when using a tendon-driven robotic manipulator with a long, passive, flexible proximal section in medical applications. A separable robot which overcomes difficulties in actuation and sterilization is introduced, in which the body containing the electronics is reusable and the remainder is disposable. A control input which resolves the redundancy in the kinematics and a physical interpretation of this redundancy are provided. The effect of a static change in the proximal section angle on bending angle error was explored under four testing conditions for a sinusoidal input. Bending angle error increased for increasing proximal section angle for all testing conditions with an average error reduction of 41.48% for re-tension, 4.28% for hysteresis, and 52.35% for re-tension + hysteresis compensation relative to the baseline case. Two major sources of error in tracking the bending angle were identified: time delay from hysteresis and DC offset from the proximal section angle. Examination of these error sources revealed that the simple hysteresis compensation was most effective for removing time delay and re-tension compensation for removing DC offset, which was the primary source of increasing error. The re-tension compensation was also tested for dynamic changes in the proximal section and reduced error in the final configuration of the tip by 89.14% relative to the baseline case.
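A minimal sketch of the two compensation ideas, assuming the plant can be idealized as a pure time delay (hysteresis) plus a DC offset (proximal-section effect) acting on a sinusoidal bending-angle command; the delay and offset values are illustrative stand-ins, not the experimentally identified quantities.

```python
import numpy as np

t = np.linspace(0.0, 2 * np.pi, 1000)
desired = 30.0 * np.sin(t)   # desired bending angle (degrees)

delay = 0.2        # assumed hysteresis time delay (radians of phase)
dc_offset = 5.0    # assumed DC offset from the proximal section angle

def measured(command_fn, t):
    # Idealized plant: responds with a time delay and a DC offset.
    return command_fn(t - delay) + dc_offset

baseline = measured(lambda x: 30.0 * np.sin(x), t)
# Hysteresis compensation: phase-advance the command by the known delay.
# Re-tension compensation: subtract the known DC offset.
compensated = measured(lambda x: 30.0 * np.sin(x + delay) - dc_offset, t)

def rmse(y):
    # Root-mean-square tracking error against the desired angle.
    return float(np.sqrt(np.mean((y - desired) ** 2)))
```

Under this idealization the compensated command tracks the desired angle exactly; in the experiments above residual error remains because the real delay and offset vary with the proximal configuration.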
Artificial Intelligence (AI) has become commonplace to solve routine everyday tasks. Because of the exponential growth in medical imaging data volume and complexity, the workload on radiologists is steadily increasing. We project that the gap between the number of imaging exams and the number of expert radiologist readers required to cover this increase will continue to expand, consequently introducing a demand for AI-based tools that improve the efficiency with which radiologists can comfortably interpret these exams. AI has been shown to improve efficiency in medical-image generation, processing, and interpretation, and a variety of such AI models have been developed across research labs worldwide. However, very few of these, if any, find their way into routine clinical use, a discrepancy that reflects the divide between AI research and successful AI translation. To address the barrier to clinical deployment, we have formed the MONAI Consortium, an open-source community which is building standards for AI deployment in healthcare institutions, and developing tools and infrastructure to facilitate their implementation. This report represents several years of weekly discussions and hands-on problem solving experience by groups of industry experts and clinicians in the MONAI Consortium. We identify barriers between AI-model development in research labs and subsequent clinical deployment and propose solutions. Our report provides guidance on processes which take an imaging AI model from development to clinical implementation in a healthcare institution. We discuss various AI integration points in a clinical Radiology workflow. We also present a taxonomy of Radiology AI use-cases. Through this report, we intend to educate the stakeholders in healthcare and AI (AI researchers, radiologists, imaging informaticists, and regulators) about cross-disciplinary challenges and possible solutions.
Harmonic functions are abundant in nature, appearing in limiting cases of Maxwell's and Navier-Stokes equations, the heat and the wave equation. Consequently, harmonic functions have many applications, spanning from industrial process optimisation to robotic path planning and the calculation of first exit times of random walks. Despite their ubiquity and relevance, there have been few attempts to develop effective means of representing harmonic functions in the context of machine learning architectures, either in machine learning on classical computers, or in the nascent field of quantum machine learning. Architectures which impose or encourage an inductive bias towards harmonic functions would facilitate data-driven modelling and the solution of inverse problems in a range of applications. For classical neural networks, it has already been established how leveraging inductive biases can in general lead to improved performance of learning algorithms. The introduction of such inductive biases within a quantum machine learning setting is instead still in its nascent stages. In this work, we derive exactly-harmonic (conventional- and quantum-) neural networks in two dimensions for simply-connected domains by leveraging the characteristics of holomorphic complex functions. We then demonstrate how these can be approximately extended to multiply-connected two-dimensional domains using techniques inspired by domain decomposition in physics-informed neural networks. We further provide architectures and training protocols to effectively impose approximately harmonic constraints in three dimensions and higher, and as a corollary we report divergence-free network architectures in arbitrary dimensions. Our approaches are demonstrated with applications to heat transfer, electrostatics and robot navigation, with comparisons to physics-informed neural networks included.
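The core two-dimensional construction can be sketched as follows: compositions of holomorphic maps are holomorphic, and the real part of a holomorphic function of z = x + iy is harmonic, so a complex-weighted layer with an entire activation (here sin) yields an exactly harmonic output by construction. The tiny network below uses random, untrained weights and is a minimal illustration, not the paper's exact architecture.

```python
import numpy as np

# Random complex weights for one hidden layer of width 4.
rng = np.random.default_rng(1)
W1 = rng.normal(size=4) + 1j * rng.normal(size=4)
b1 = rng.normal(size=4) + 1j * rng.normal(size=4)
W2 = rng.normal(size=4) + 1j * rng.normal(size=4)

def net(x, y):
    z = x + 1j * y
    h = np.sin(W1 * z + b1)            # holomorphic hidden layer
    return float(np.real(np.sum(W2 * h)))  # harmonic real part

def laplacian(f, x, y, h=1e-4):
    # Five-point finite-difference Laplacian.
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4.0 * f(x, y)) / h ** 2
```

The finite-difference Laplacian of `net` vanishes up to discretization error at any point, whereas a generic function such as x^2 yields its analytic Laplacian of 2.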
An approach to evolutionary ensemble learning for classification is proposed in which boosting is used to construct a stack of programs. Each application of boosting identifies a single champion and a residual dataset, i.e. the training records that thus far were not correctly classified. The next program is only trained against the residual, with the process iterating until some maximum ensemble size or no further residual remains. Training against a residual dataset actively reduces the cost of training. Deploying the ensemble as a stack also means that only one classifier might be necessary to make a prediction, so improving interpretability. Benchmarking studies are conducted to illustrate competitiveness with the prediction accuracy of current state-of-the-art evolutionary ensemble learning algorithms, while providing solutions that are orders of magnitude simpler. Further benchmarking with a high cardinality dataset indicates that the proposed method is also more accurate and efficient than XGBoost.
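The residual-driven training loop can be sketched with decision stumps standing in for evolved programs; the stump learner, the synthetic data, and the stopping rule are simplifications of the boosting-based method described above, not its actual genetic programming machinery.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=200)
y = (np.abs(X) > 0.5).astype(int)       # not separable by one stump

def train_stump(X, y):
    # "Program" stand-in: pick the threshold/polarity with fewest errors.
    best = None
    for thr in np.unique(X):
        for pol in (0, 1):
            pred = (X > thr).astype(int) ^ pol
            err = int(np.sum(pred != y))
            if best is None or err < best[0]:
                best = (err, thr, pol)
    return best[1], best[2]

stack, Xr, yr = [], X, y
while len(Xr) > 0 and len(stack) < 5:   # cap on ensemble size
    thr, pol = train_stump(Xr, yr)      # champion of this round
    stack.append((thr, pol))
    pred = (Xr > thr).astype(int) ^ pol
    keep = pred != yr                   # residual: still misclassified
    Xr, yr = Xr[keep], yr[keep]
```

Each round trains only on the shrinking residual, so later learners are progressively cheaper, and the loop stops as soon as no residual remains or the size cap is reached.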
Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be made safe, reproducible and robust, and the underlying software framework must be aware of the particularities (e.g. geometry, physiology, physics) of medical data being processed. This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provides purpose-specific AI model architectures, transformations and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by and receiving contributions from research, clinical and industrial teams from around the world, who are pursuing applications spanning nearly every aspect of healthcare.
A variety of adversarial attacks have been proposed and explored using image and audio data. These attacks are notoriously easy to generate digitally, when the attacker can directly manipulate the input to a model, but are much more difficult to implement in the real world. In this paper we present a universal, time-invariant attack for general time-series data, such that the attack has a frequency spectrum primarily composed of the frequencies present in the original data. The universality of the attack makes it fast and easy to craft, as it does not need to be computed for each input; time invariance is useful for real-world deployment. Additionally, the frequency constraint ensures the attack can withstand filtering. We demonstrate the effectiveness of the attack in two different domains, speech recognition and unintended radiated emission, and show that the attack is robust against common transformation-based defense pipelines.
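The frequency constraint can be illustrated by projecting a candidate universal perturbation onto the dominant frequency support of the clean data via an FFT mask; the signal construction, energy threshold, and masking step are illustrative assumptions, not the paper's attack-generation procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
t = np.arange(n)
# Clean data concentrated at two frequencies (bins 5 and 12; illustrative).
clean = np.sin(2 * np.pi * 5 * t / n) + 0.5 * np.sin(2 * np.pi * 12 * t / n)

# Support: frequency bins carrying most of the clean signal's energy.
spectrum = np.abs(np.fft.rfft(clean))
support = spectrum > 0.1 * spectrum.max()

def project_to_support(delta):
    """Zero every frequency bin outside the clean data's support."""
    return np.fft.irfft(np.fft.rfft(delta) * support, n=n)

# A candidate universal perturbation, constrained to the data's band.
delta = project_to_support(rng.normal(size=n))
```

Because the perturbation carries energy only where the clean data does, a band-pass filter tuned to the data's spectrum passes the perturbation as well.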
The Solar Dynamics Observatory (SDO), a NASA multi-spectral decade-long mission that has been daily producing terabytes of observational data from the Sun, has been used to demonstrate the potential of machine learning methodologies and to pave the way for future deep space mission planning. In particular, the idea of using image-to-image translation to virtually produce extreme ultraviolet channels has been proposed in several recent studies, as a way to both enhance missions with fewer available channels and to alleviate the challenges due to the low downlink rate in deep space. This paper investigates the potential and limitations of such a deep learning approach by focusing on the permutation of four channels and an encoder-decoder based architecture, with particular attention to how morphological traits and brightness of the solar surface affect the neural network predictions. In this work we want to answer the question: can synthetic images of the solar corona produced via image-to-image translation be used for scientific studies of the Sun? The analysis highlights that the neural network produces high-quality images in terms of count rate (pixel intensity) and can generally reproduce the covariance across channels within a 1% error. However, the model performance drastically diminishes in correspondence of extremely high energetic events like flares, and we argue that the reason is related to the rarity of such events, which poses a challenge to model training.